
 saddle-point formulation



Following the unanimous suggestion by three reviewers, we will provide a simulation section in the final version.

Neural Information Processing Systems

We appreciate the valuable comments from the reviewers, which will lead to a substantially improved final paper. The main difference is that we use a variational characterization of the conditional expectation, while DualIV takes a different route. Our derivation has a natural connection to the generalized method of moments; we will define it explicitly in the final draft. We will also clarify the bound of Theorem 4.2 and, regarding the second comment, what the quantity in question should be. The saddle-point formulation eliminates the need for double samples; this point was briefly mentioned at the beginning of Sec. 2.2 and will be elaborated on in the final draft.
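As a sketch of why the saddle-point formulation avoids double samples (the notation below is assumed for illustration, not taken from the paper): a squared conditional-moment objective requires two independent draws of $y$ for the same $x$ to estimate its gradient without bias, whereas the Fenchel-dual identity $v^2 = \max_u (2uv - u^2)$ linearizes the residual.

```latex
% Squared conditional-moment objective: the conditional expectation
% appears squared, so an unbiased gradient estimate needs two
% independent samples of y for the same x ("double samples").
\min_f \; \mathbb{E}_x\!\left[\big(\mathbb{E}[\,y - f(x)\mid x\,]\big)^2\right]

% Dualizing the square with v^2 = \max_u (2uv - u^2), applied
% pointwise with u = u(x), makes the objective linear in the
% residual, so a single (x, y) sample per term suffices:
\min_f \max_u \; \mathbb{E}_{x,y}\!\left[\,2\,u(x)\,\big(y - f(x)\big) - u(x)^2\,\right]
```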




Adversarial Deep Learning for Robust Detection of Binary Encoded Malware

Al-Dujaili, Abdullah, Huang, Alex, Hemberg, Erik, O'Reilly, Una-May

arXiv.org Machine Learning

Malware is constantly adapting in order to avoid detection. Model-based malware detectors, such as SVMs and neural networks, are vulnerable to so-called adversarial examples: modest changes to detectable malware that allow the resulting malware to evade detection. Continuous-valued methods that are robust to adversarial examples of images have been developed using saddle-point optimization formulations. We are inspired by them to develop similar methods for the discrete, e.g., binary, domain which characterizes the features of malware. A specific extra challenge of malware is that the adversarial examples must be generated in a way that preserves their malicious functionality. We introduce methods capable of generating functionality-preserving adversarial malware examples in the binary domain. Using the saddle-point formulation, we incorporate the adversarial examples into the training of models that are robust to them. We evaluate the effectiveness of the methods and others in the literature on a set of Portable Executable (PE) files. This comparison prompts us to introduce an online measure, computed during training, to assess the expected robustness of a model.
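The saddle-point idea in the abstract can be sketched in code. The following is a minimal illustration, not the paper's implementation: a logistic detector over binary features, an inner maximization that greedily flips zero bits to one (feature *addition* only, a stand-in for perturbations assumed to preserve malicious functionality), and an outer minimization that trains on the perturbed inputs. All names, the toy data, and the budget `k` are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # clip to avoid overflow in exp for large |z|
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def input_grad(w, X, y):
    # gradient of the logistic loss w.r.t. the binary inputs X
    p = sigmoid(X @ w)
    return (p - y)[:, None] * w[None, :]

def bit_flip_attack(w, X, y, k=4):
    # inner maximization: flip up to k zero bits to one per sample,
    # greedily choosing the bit with the largest positive loss
    # gradient; 0->1 flips model feature addition, which (unlike
    # removal) is assumed here to keep the malware functional
    X_adv = X.copy()
    for _ in range(k):
        g = input_grad(w, X_adv, y)
        g[X_adv == 1.0] = -np.inf          # bits may only be set, never cleared
        j = np.argmax(g, axis=1)
        rows = np.arange(len(X_adv))
        gain = g[rows, j] > 0              # only flip if the loss increases
        X_adv[rows[gain], j[gain]] = 1.0
    return X_adv

def adversarial_train(X, y, k=4, lr=0.5, epochs=200):
    # outer minimization: gradient descent on the loss evaluated at
    # the adversarially perturbed inputs (the saddle-point objective)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        X_adv = bit_flip_attack(w, X, y, k)
        p = sigmoid(X_adv @ w)
        w -= lr * X_adv.T @ (p - y) / len(y)
    return w

# toy data: 40 samples, 16 binary features;
# "malicious" iff feature 0 or feature 1 is set
X = (rng.random((40, 16)) < 0.3).astype(float)
y = ((X[:, 0] + X[:, 1]) > 0).astype(float)

w = adversarial_train(X, y)
X_adv = bit_flip_attack(w, X, y, k=4)
robust_acc = np.mean((sigmoid(X_adv @ w) > 0.5) == y)
```

The restriction to 0-to-1 flips is the key discrete-domain twist: it encodes the functionality-preservation constraint directly in the attack's feasible set, rather than via a norm ball as in the continuous image setting.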